Exploring dense matching between the current frame and past frames for long-range context modeling, memory-based methods have recently demonstrated impressive results in video object segmentation (VOS). Nevertheless, due to the lack of instance understanding ability, these approaches are often brittle to large appearance variations or viewpoint changes resulting from the movement of objects and cameras. In this paper, we argue that instance understanding matters in VOS, and that integrating it with memory-based matching yields a synergy, which is intuitively sensible from the definition of the VOS task, i.e., identifying and segmenting object instances within the video. Towards this goal, we present a two-branch network for VOS, where the query-based instance segmentation (IS) branch delves into the instance details of the current frame and the VOS branch performs spatial-temporal matching with the memory bank. We employ the well-learned object queries from the IS branch to inject instance-specific information into the query key, with which instance-augmented matching is further performed. In addition, we introduce a multi-path fusion block to effectively combine the memory readout with multi-scale features from the instance segmentation decoder, which incorporates high-resolution instance-aware features to produce the final segmentation results. Our method achieves state-of-the-art performance on DAVIS 2016/2017 val (92.6% and 87.1%), DAVIS 2017 test-dev (82.8%), and YouTube-VOS 2018/2019 val (86.3% and 86.3%), outperforming alternative methods by clear margins.
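To make the instance-injection idea concrete, below is a minimal sketch of memory readout with an instance-augmented query key; all shapes, function names, and the mean-pooling of object queries are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of instance-augmented memory matching (hypothetical shapes/names).
import torch
import torch.nn.functional as F

def instance_augmented_readout(query_key, memory_key, memory_value, object_queries, proj):
    """query_key:      (B, C, H, W)   key of the current frame
    memory_key:     (B, C, T*H*W)  keys stored in the memory bank
    memory_value:   (B, Cv, T*H*W) values stored in the memory bank
    object_queries: (B, N, C)      well-learned queries from the IS branch
    proj:           e.g. nn.Linear(C, C), projects pooled queries into key space
    """
    B, C, H, W = query_key.shape
    # Inject instance-specific information into the query key.
    inst = proj(object_queries.mean(dim=1))              # (B, C)
    qk = query_key + inst.view(B, C, 1, 1)               # broadcast over space
    # Dense matching: affinity between every query pixel and every memory pixel.
    affinity = torch.einsum('bchw,bcm->bhwm', qk, memory_key) / C ** 0.5
    weights = F.softmax(affinity.flatten(1, 2), dim=-1).view(B, H, W, -1)
    # Memory readout: weighted sum of memory values.
    return torch.einsum('bhwm,bcm->bchw', weights, memory_value)
```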
This paper presents OmniVL, a new foundation model designed to support both image-language and video-language tasks with one universal architecture. It adopts a unified transformer-based visual encoder for both image and video inputs, and can thus perform joint image-language and video-language pretraining. We demonstrate, for the first time, that such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer (e.g., using image-language data to help video-language tasks). To this end, we propose decoupled joint pretraining of image-language and video-language, which effectively decomposes vision-language modeling into spatial and temporal dimensions and obtains performance gains on both image and video tasks. Moreover, we introduce a novel unified vision-language contrastive (UniVLC) loss to leverage image-text, video-text, image-label (e.g., image classification), and video-label (e.g., video action recognition) data together, so that both supervised and noisily supervised pretraining data are exploited as much as possible. Without extra task adapters, OmniVL can simultaneously support vision-only tasks (e.g., image classification, video action recognition), cross-modal alignment tasks (e.g., image/video-text retrieval), and multi-modal understanding and generation tasks (e.g., image/video question answering, captioning). We evaluate OmniVL on a wide range of downstream tasks and achieve state-of-the-art or competitive results with similar model size and data scale.
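A rough sketch of how a unified contrastive loss can fold all four data sources into one objective follows; verbalizing class labels as captions is our reading of the mechanism, and all names are hypothetical.

```python
# Hypothetical sketch of a UniVLC-style unified contrastive loss: image-label /
# video-label data enter the same objective as image-text / video-text pairs by
# verbalizing labels as text (e.g., "dog" -> "a photo of a dog").
import torch
import torch.nn.functional as F

def unified_contrastive_loss(visual_emb, text_emb, temperature=0.07):
    """visual_emb: (B, D) embeddings of images or video clips
    text_emb:   (B, D) embeddings of captions or verbalized labels
    """
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric InfoNCE: match visual->text and text->visual.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
```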
We seek to combine the nonlinear modeling capability of a broad class of neural networks with the safety guarantees of model predictive control (MPC) in a rigorous online computational framework. The class of networks considered can be captured by Koopman operators, and is integrated into a Koopman-based tracking MPC (KTMPC) for nonlinear systems to track piecewise constant references. The effect of model mismatch between the original nonlinear dynamics and the trained Koopman linear model is handled by a constraint-tightening approach in the proposed tracking MPC strategy. By choosing two Lyapunov candidate functions, we prove that the solution remains feasible and that the closed loop is input-to-state stable with respect to both online and offline optimal reachable steady outputs in the presence of bounded modeling errors. Finally, we demonstrate the results on a numerical example, as well as an application to an autonomous ground vehicle tracking a given reference.
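The computational core is a linear predictor in a lifted space. Below is a minimal sketch assuming a hand-picked lifting function and pre-learned matrices A and B (the paper instead trains a network whose dynamics the Koopman operator captures):

```python
# Minimal sketch of Koopman-based prediction as used inside a tracking MPC
# (hypothetical dimensions; the lifting and A, B would be learned offline).
import numpy as np

def lift(x):
    """Hypothetical lifting phi(x): state plus simple nonlinear observables."""
    return np.concatenate([x, np.sin(x), x ** 2])

def koopman_rollout(x0, u_seq, A, B):
    """Predict a trajectory with the linear lifted model z+ = A z + B u.
    The MPC optimizes u_seq against *tightened* constraints so the bounded
    mismatch to the true nonlinear dynamics cannot cause violations."""
    z = lift(x0)
    traj = [z]
    for u in u_seq:
        z = A @ z + B @ u
        traj.append(z)
    return np.stack(traj)
```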
Camouflaged object detection (COD), segmenting objects that blend elegantly into their surroundings, is a valuable yet challenging task. Existing deep learning methods often struggle to accurately identify camouflaged objects with complete and fine structures. To this end, in this paper we propose a novel boundary-guided network (BGNet) for camouflaged object detection. Our method exploits valuable, additional object-related edge semantics to guide representation learning for COD, which forces the model to generate features that highlight object structure, thereby promoting camouflaged object detection with accurate boundary localization. Extensive experiments on three challenging benchmark datasets demonstrate that our BGNet significantly outperforms 18 existing state-of-the-art methods under four widely used evaluation metrics. Our code is publicly available at: https://github.com/thograce/bgnet.
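As a rough illustration of boundary-guided representation learning, the toy module below predicts an edge map and uses it to modulate features; it is a generic sketch under our own assumptions, not BGNet's actual blocks.

```python
# Generic sketch of edge-guided feature fusion (illustrative, not BGNet's design).
import torch
import torch.nn as nn

class EdgeGuidedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.edge_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

    def forward(self, feat):
        edge = torch.sigmoid(self.edge_head(feat))     # predicted boundary map
        # Edge semantics modulate the features so object structure is emphasized.
        guided = self.fuse(torch.cat([feat * (1 + edge), edge], dim=1))
        return guided, edge    # edge can be supervised by ground-truth boundaries
```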
Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. This specialization in QA research hinders systems from modeling commonalities across tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus formatted with structural prompts, which endows the model with the commonly required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance across full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.
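To illustrate what a structural prompt-based input schema might look like, here is a toy sketch; the field names and template are hypothetical, not the paper's exact format.

```python
# Illustrative sketch of a structural prompt in the spirit of ProQA
# (field names are our assumption, not the paper's exact template).
def build_structural_prompt(task, fmt, question, context, options=None):
    """A shared structure with task-specific slot values lets one model keep
    per-task customization while sharing QA knowledge across tasks."""
    parts = [
        f"[Task] {task}",        # e.g. "extractive QA" / "multiple-choice QA"
        f"[Format] {fmt}",       # e.g. "span" / "choice index"
        f"[Question] {question}",
        f"[Context] {context}",
    ]
    if options:
        parts.append("[Options] " + " | ".join(options))
    return " ".join(parts)

print(build_structural_prompt(
    "multiple-choice QA", "choice index",
    "What is the capital of France?", "France is a country in Europe.",
    ["Paris", "Lyon"]))
```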
Federated learning is a machine learning training paradigm that enables clients to jointly train models without sharing their local data. However, implementing federated learning in practice still faces numerous challenges, such as the large communication overhead caused by repeated server-client synchronization and the lack of adaptivity in SGD-based model updates. Although various methods have been proposed to reduce the communication cost via gradient compression or quantization, and federated versions of adaptive optimizers (e.g., FedAdam) have been proposed to add adaptivity, current federated learning frameworks still cannot address all of these challenges at once. In this paper, we propose a novel communication-efficient adaptive federated learning method (FedCAMS) with theoretical convergence guarantees. We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterpart. Extensive experiments on various benchmarks verify our theoretical analysis.
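The method's name points to two ingredients: compressed communication and an adaptive AMSGrad-style update. A simplified sketch of both follows; this is not the exact FedCAMS algorithm.

```python
# Simplified sketch of the two ingredients combined in FedCAMS-style methods:
# top-k gradient compression plus an AMSGrad update with max stabilization.
import torch

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries; zero the rest."""
    flat = grad.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)

def amsgrad_server_step(param, grad, m, v, v_hat, lr=1e-3, b1=0.9, b2=0.99, eps=1e-8):
    m.mul_(b1).add_(grad, alpha=1 - b1)            # first moment
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)  # second moment
    torch.maximum(v_hat, v, out=v_hat)             # max stabilization (AMSGrad)
    param.sub_(lr * m / (v_hat.sqrt() + eps))
```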
Spatial-temporal representation learning is crucial for self-supervised video representation. Recent approaches mainly rely on contrastive learning and pretext tasks. However, these approaches learn representations by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate states of the learned representations, which limits overall performance. In this work, taking the similarity of sampled instances as an intermediate state, we propose a novel pretext task: spatial-temporal overlap rate (STOR) prediction. It stems from the observation that humans are able to discriminate the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples in order to learn representations. Moreover, we employ joint optimization that combines the pretext task with contrastive learning to further enhance spatial-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task benefits both contrastive learning and pretext tasks, and that the joint optimization scheme can significantly improve spatial-temporal representations for video understanding. The code is available at https://github.com/katou2/cstp.
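The pretext target itself is easy to state. Below is a toy sketch of computing temporal and spatial overlap rates for two clips cropped from the same video; the crop parameterization is our simplification of the paper's setup.

```python
# Toy sketch of generating spatial-temporal overlap rate (STOR) targets for
# two equal-length clips from the same video (simplified parameterization).
def temporal_overlap_rate(start1, start2, length):
    """Fraction of frames shared by two equal-length clips."""
    overlap = max(0, length - abs(start1 - start2))
    return overlap / length

def spatial_overlap_rate(box1, box2):
    """IoU of two square crops given as (x, y, size)."""
    x1, y1, s1 = box1
    x2, y2, s2 = box2
    ix = max(0, min(x1 + s1, x2 + s2) - max(x1, x2))
    iy = max(0, min(y1 + s1, y2 + s2) - max(y1, y2))
    inter = ix * iy
    union = s1 * s1 + s2 * s2 - inter
    return inter / union

# The model sees the two clips and predicts these rates as the pretext task.
print(temporal_overlap_rate(0, 8, 16), spatial_overlap_rate((0, 0, 112), (56, 56, 112)))
```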
Prompt tuning (PT) is a promising parameter-efficient method for utilizing extremely large pre-trained language models (PLMs), which can achieve performance comparable to full-parameter fine-tuning by tuning only a few soft prompts. However, PT empirically requires many more training steps than fine-tuning. To explore whether we can improve the efficiency of PT by reusing trained soft prompts and sharing learned knowledge, we empirically investigate the transferability of soft prompts across different tasks and models. In cross-task transfer, we find that trained soft prompts can be transferred to similar tasks and used to initialize PT for them, accelerating training and improving performance. Moreover, to explore what factors influence prompt transferability across tasks, we investigate how to measure prompt similarity and find that the overlap rate of activated neurons is highly correlated with transferability. In cross-model transfer, we explore how to project the prompts of one PLM onto another, and successfully train a projector that achieves non-trivial transfer performance on similar tasks. However, initializing PT with the projected prompts does not work well, which may be caused by optimization preferences and the high redundancy of PLMs. Our findings show that improving PT with knowledge transfer is possible and promising, and that the cross-task transferability of prompts is generally better than their cross-model transferability.
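The activated-neuron overlap metric can be sketched as follows, assuming one has extracted mean feed-forward activations per layer for each prompt; the thresholding choice is an assumption on our part.

```python
# Sketch of an activated-neuron overlap similarity between two soft prompts
# (illustrative; extraction and thresholding details are assumptions).
import torch

def activated_neuron_overlap(acts_a, acts_b, threshold=0.0):
    """acts_a, acts_b: (num_layers, num_neurons) mean FFN activations produced
    when running the same frozen PLM with two different soft prompts."""
    on_a = acts_a > threshold
    on_b = acts_b > threshold
    inter = (on_a & on_b).sum().item()
    union = (on_a | on_b).sum().item()
    return inter / max(union, 1)   # high overlap ~ high prompt transferability

a, b = torch.randn(12, 3072), torch.randn(12, 3072)
print(activated_neuron_overlap(a, b))
```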
Owing to the explosion in the size of training datasets, distributed learning has received growing interest in recent years. One of the major bottlenecks is the communication cost between the central server and the local workers. While error-feedback compression has been proven to reduce communication costs with stochastic gradient descent (SGD), there have been far fewer attempts to build communication-efficient adaptive gradient methods, which are widely used for training large-scale machine learning models. In this paper, we propose a new communication-compressed AMSGrad for distributed nonconvex optimization problems that is provably efficient. Our proposed distributed learning framework features an effective gradient compression strategy and a worker-side model update design. We prove that the proposed communication-efficient distributed adaptive gradient method converges to a first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting. Experiments on various benchmarks back up our theory.
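A generic error-feedback compressor of the kind referenced above can be sketched as follows; the paper's actual compression strategy and worker-side update may differ.

```python
# Minimal sketch of worker-side error-feedback top-k compression
# (generic scheme, not the paper's exact update rule).
import torch

class ErrorFeedbackWorker:
    def __init__(self, shape, k):
        self.residual = torch.zeros(shape)   # accumulated compression error
        self.k = k

    def compress(self, grad):
        corrected = grad + self.residual     # add back what was dropped before
        flat = corrected.flatten()
        idx = flat.abs().topk(self.k).indices
        sent = torch.zeros_like(flat)
        sent[idx] = flat[idx]
        sent = sent.view_as(grad)
        self.residual = corrected - sent     # remember what we failed to send
        return sent                          # only this goes to the server
```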
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For an SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves upon recent advances of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
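For context, estimators of this kind build on the standard Gaussian-smoothing identities below (our gloss; ReSQue's reweighting is a refinement of this plain form). Writing $\gamma_\rho$ for the density of $\mathcal{N}(0, \rho^2 I_d)$,
$$\nabla (f * \gamma_\rho)(x) \;=\; \mathbb{E}_{z \sim \mathcal{N}(x,\,\rho^2 I_d)}\big[\nabla f(z)\big] \;=\; \mathbb{E}_{z \sim \mathcal{N}(x,\,\rho^2 I_d)}\Big[f(z)\,\tfrac{z - x}{\rho^2}\Big],$$
where the first form assumes $f$ is differentiable, while the second (score-function) form holds even for nondifferentiable $f$, so the smoothed gradient admits an unbiased stochastic estimate from function or gradient queries alone.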